Feat/mt resolve deposit tests #198
base: main
Conversation
📝 Walkthrough

Adds refund-log length checks to NEP-245 event emission, exposes a raw `mt_on_transfer` API returning `ExecutionFinalResult`, extends sandbox APIs and fixture handling, augments receiver test stubs and long-gas NEP-245 tests, and adds CI/test feature orchestration for long tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Tester as Tester
    participant Sandbox as Sandbox (SigningAccount)
    participant Runtime as Runtime
    participant Receiver as Receiver Contract/WASM
    Tester->>Sandbox: call mt_on_transfer_raw(msg)
    Sandbox->>Runtime: submit transaction / create receipt
    Runtime->>Receiver: execute mt_on_transfer (callback)
    Receiver-->>Runtime: return value (possibly large bytes / refunds)
    Runtime-->>Sandbox: ExecutionFinalResult (receipt outcomes)
    Sandbox-->>Tester: return ExecutionFinalResult
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: 1 passed, 1 warning, 1 inconclusive
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/src/tests/defuse/tokens/nep245/mod.rs`:
- Around line 17-28: The binary-search loop can underflow when mid == 0 and
test(mid) returns Err; replace the direct subtraction hi = mid - 1 with a
guarded update so that if mid == 0 you break the loop (no lower index exists)
otherwise set hi = mid - 1 (or hi = mid.saturating_sub(1) combined with an if
mid == 0 { break; } check). Update the match arm for Err(_) to use this guard
referencing lo, hi, mid, test and best so the loop cannot wrap to usize::MAX.
🧹 Nitpick comments (4)
tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (1)
278-291: Test coverage for `binary_search_max` is good but doesn't cover an edge case.

The test validates all values from 0 to 99, which is comprehensive. However, it only covers cases where at least `test(0)` succeeds (since `limit` ranges from 0 to 99 and `test(x)` succeeds when `x <= limit`). Consider adding a test case where all values fail, to ensure the underflow issue mentioned in `mod.rs` is covered:

✨ Additional test case

```rust
// Test case where all values fail
let test_all_fail = |_x| async { Err(anyhow::anyhow!("always fail")) };
assert_eq!(binary_search_max(0, max, test_all_fail).await, None);
```

sandbox/Cargo.toml (1)
21-21: Add libc to workspace dependencies for consistency.

The `libc` crate is used in the sandbox code (`libc::atexit` in `sandbox/src/lib.rs`) but is specified with a direct version instead of using the `workspace = true` pattern like all other dependencies. Declare it in the workspace `[dependencies]` section to maintain consistency across the codebase.

sandbox/src/lib.rs (1)
115-140: Potential cascading test failures on poisoned mutex.

If a test panics while the sandbox fixture is being created (before the lock is released), subsequent tests will fail with a panic on line 126's `.unwrap()` due to mutex poisoning. Consider recovering from a poisoned mutex or documenting this behavior. That said, for test infrastructure where a panic during setup indicates a fundamental issue, this is often acceptable.

♻️ Optional: Handle poisoned mutex gracefully

```diff
 let (sandbox_arc, root_account) = mutex
     .lock()
-    .unwrap()
+    .unwrap_or_else(|poisoned| poisoned.into_inner())
     .as_ref()
     .map(|shared| (shared.sandbox.clone(), shared.root.clone()))
     .unwrap();
```
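The recovery pattern in the diff above can be exercised in isolation. Below is a minimal, self-contained sketch (plain `std`, no sandbox types; `take_recovering` is an illustrative name) showing that `unwrap_or_else(PoisonError::into_inner)` still yields the protected value after a panic poisons the lock:

```rust
use std::sync::{Arc, Mutex, PoisonError};
use std::thread;

// Lock a possibly-poisoned mutex and take its value, recovering from poison.
fn take_recovering(m: &Mutex<Option<u32>>) -> Option<u32> {
    m.lock().unwrap_or_else(PoisonError::into_inner).take()
}

fn main() {
    let shared = Arc::new(Mutex::new(Some(42u32)));

    // Poison the mutex by panicking while the lock is held.
    let m = Arc::clone(&shared);
    let _ = thread::spawn(move || {
        let _guard = m.lock().unwrap();
        panic!("simulated panic during fixture setup");
    })
    .join();

    assert!(shared.lock().is_err(), "mutex should now be poisoned");
    // A plain `.unwrap()` here would panic; recovery still sees the value.
    assert_eq!(take_recovering(&shared), Some(42));
}
```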
tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (1)

74-85: Potential underflow in account name length calculation.

If `root_id_len > 63`, the subtraction `TARGET_LEN - 1 - root_id_len` will underflow (wrap around to a large value); at exactly 63 the computed name length is 0, which is also invalid. Either way the subaccount creation will likely fail or behave unexpectedly.

In practice, the sandbox root ID (`test` or similar) is short, so this won't trigger. However, adding a guard would make the code more robust.

♻️ Optional: Add a guard for the name length calculation

```diff
 TokenIdGenerationMode::Medium => {
     // Use a 64-char named account: {name}.{root_id} = 64 chars total
     const TARGET_LEN: usize = 64;
     let root_id_len = env.root().id().as_str().len();
     // name_len + 1 (dot) + root_id_len = TARGET_LEN
+    assert!(
+        root_id_len < TARGET_LEN - 1,
+        "root_id too long for medium mode"
+    );
     let name_len = TARGET_LEN - 1 - root_id_len;
```
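The same guard can be written with `checked_sub`, which makes the underflow case explicit instead of asserting. A minimal sketch with a hypothetical helper (`name_len_for` is illustrative, not a name from the codebase):

```rust
// Hypothetical helper: length of `{name}` in `{name}.{root_id}` totaling
// `target_len` characters, or `None` when no valid (non-empty) name fits.
fn name_len_for(target_len: usize, root_id: &str) -> Option<usize> {
    // name_len + 1 (dot) + root_id.len() == target_len, without underflow.
    target_len.checked_sub(1 + root_id.len()).filter(|&n| n >= 1)
}

fn main() {
    assert_eq!(name_len_for(64, "test"), Some(59)); // 59 + 1 + 4 == 64
    assert_eq!(name_len_for(64, &"a".repeat(63)), None); // name would be empty
    assert_eq!(name_len_for(64, &"a".repeat(70)), None); // would underflow
}
```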
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- sandbox/Cargo.toml
- sandbox/src/extensions/mt.rs
- sandbox/src/lib.rs
- sandbox/src/tx/wrappers.rs
- tests/contracts/multi-token-receiver-stub/src/lib.rs
- tests/src/tests/defuse/tokens/nep245/mod.rs
- tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs
- tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs
🧰 Additional context used
🧬 Code graph analysis (3)
tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (3)
- tests/src/tests/defuse/tokens/nep245/mod.rs (4): binary_search_max (8-31), None (996-996), None (1187-1187), None (1558-1558)
- test-utils/src/random.rs (2): random (82-84), gen_random_string (121-127)
- sandbox/src/extensions/mt.rs (2): tx (114-127), tx (170-179)

tests/src/tests/defuse/tokens/nep245/mod.rs (2)
- tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (2): mt_deposit_resolve_gas (312-389), None (336-336)
- tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (1): mt_transfer_resolve_gas (232-276)

tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (1)
- tests/src/tests/defuse/tokens/nep245/mod.rs (1): binary_search_max (8-31)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: Build
- GitHub Check: Build Reproducible
- GitHub Check: Security Audit - deny
- GitHub Check: Shared / Static Analysis (intents-poa-factory, contract, near, poa-factory, 1, ., 1.18, false, --filter-pa...
- GitHub Check: Shared / Static Analysis (intents-contract, contract, near, defuse, 1, ., 1.18, false, --filter-paths "nod...
- GitHub Check: Shared / Static Analysis (intents-poa-token, contract, near, poa-token, 1, ., 1.18, false, --filter-paths ...
- GitHub Check: Shared security analysis / Static Analysis (intents-poa-token, contract, near, poa-token, 1, ., 1.18, false, --filter-paths ...
- GitHub Check: Shared security analysis / Static Analysis (intents-contract, contract, near, defuse, 1, ., 1.18, false, --filter-paths "nod...
- GitHub Check: Shared security analysis / Static Analysis (intents-poa-factory, contract, near, poa-factory, 1, ., 1.18, false, --filter-pa...
- GitHub Check: Security Audit - report
🔇 Additional comments (16)
sandbox/src/tx/wrappers.rs (1)
49-58: LGTM! The truncation logic for large byte payloads is well implemented. The threshold of 32 bytes, showing the first and last 16 bytes, provides a good balance between readability and debugging utility.
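For illustration, here is a self-contained sketch of this style of truncation; the function name and exact output format are assumptions, not the actual `wrappers.rs` implementation:

```rust
// Render a byte payload for logs, eliding the middle of large blobs:
// payloads over 32 bytes show only the first and last 16 bytes.
fn display_bytes(bytes: &[u8]) -> String {
    const THRESHOLD: usize = 32;
    const EDGE: usize = 16;
    if bytes.len() <= THRESHOLD {
        format!("{bytes:?}")
    } else {
        format!(
            "{:?}..{:?} ({} bytes)",
            &bytes[..EDGE],
            &bytes[bytes.len() - EDGE..],
            bytes.len()
        )
    }
}

fn main() {
    let small = vec![1u8; 8];
    assert!(!display_bytes(&small).contains(".."));

    let large = vec![0xABu8; 100];
    let shown = display_bytes(&large);
    assert!(shown.contains("(100 bytes)"));
    println!("{shown}");
}
```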
sandbox/src/extensions/mt.rs (1)
184-206: LGTM with a minor duplication note.

The implementation correctly mirrors `mt_on_transfer` while returning the raw `ExecutionFinalResult`. The code duplication between `mt_on_transfer` and `mt_on_transfer_raw` (lines 165-177 vs 191-203) is acceptable given the different return paths, though a shared helper could reduce this in the future if more `*_raw` variants are added.

tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (1)
1-1: LGTM! The import correctly references the refactored `binary_search_max` from the parent module. This consolidation reduces code duplication across test files.

tests/contracts/multi-token-receiver-stub/src/lib.rs (4)
9-16: LGTM! The raw extern function correctly bypasses SDK serialization to return arbitrary bytes for testing purposes. The conditional compilation to `wasm32` is appropriate.
18-32: LGTM! The extension trait provides a clean way to chain the raw return value call onto Promise objects. Using `Gas::from_ggas(0)` with `GasWeight(1)` correctly defers gas allocation to the runtime.
45-58: LGTM! The new enum variants are well documented and cover important edge cases for testing: legitimate refunds, malicious overflow attempts, and large return payloads.
94-96: LGTM! The `ReturnBytes` handler correctly creates a raw byte response via the stub mechanism. The `len.0 as usize` cast is safe here since the test uses values well within `u32` range (~4MB), and this runs exclusively on wasm32.

sandbox/src/lib.rs (3)
6-20: LGTM!The expanded imports and public re-exports provide appropriate concurrency primitives and convenient access to the extension traits for test consumers.
92-95: LGTM! The `fast_forward` method provides a useful wrapper for time-based testing scenarios. The `unwrap()` is acceptable for test infrastructure.
97-107: LGTM! The `OnceCell<Mutex<Option<Sandbox>>>` pattern correctly enables async initialization while allowing synchronous cleanup via `atexit`. Using `Option::take()` ensures proper ownership transfer during cleanup, and silently handling a poisoned mutex during shutdown is pragmatic.

tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (6)
1-21: LGTM! The imports and the `TOTAL_LOG_LENGTH_LIMIT` constant are appropriate for the test module's purpose.
23-51: LGTM! The `ReceiverTestMode` enum provides a clean abstraction over `MTReceiverMode` with appropriate minimum gas values for each test scenario.
113-149: LGTM! The `make_token_ids` function correctly generates token IDs of varying lengths. The string slicing is safe because both the index prefix and the random string consist of ASCII characters, ensuring single-byte UTF-8 encoding.
166-199: LGTM! The log size validation is a valuable pre-check that ensures test parameters won't exceed blockchain event log limits before executing the expensive on-chain operations.
207-302: LGTM! The test orchestration function is well structured, with clear separation between recoverable errors (for binary search) and critical failures (assertions for contract bugs). The comment on lines 271-273 clearly explains the rationale for the `assert!` vs `bail` distinction.
304-389: LGTM! The parametrized test provides comprehensive coverage of token ID generation modes and receiver behaviors. The binary search approach efficiently finds the maximum token count, and the minimum threshold assertion (50 tokens) ensures the contract maintains acceptable capacity.
```rust
while lo <= hi {
    let mid = lo + (hi - lo) / 2;
    match test(mid).await {
        Ok(()) => {
            best = Some(mid);
            lo = mid + 1; // success -> try higher
        }
        Err(_) => {
            hi = mid - 1; // failure -> try lower
        }
    }
}
```
Potential underflow when `mid == 0` and the test fails.

If `lo` is 0 and `test(0)` returns `Err`, line 25 computes `hi = mid - 1`, which causes unsigned integer underflow (`0 - 1` wraps to `usize::MAX`). This would cause the loop to continue with invalid bounds.

In the current usage with `min_token_count = 1`, this isn't triggered, but the function could fail unexpectedly if called with `lo = 0`.
🐛 Proposed fix to handle underflow

```diff
 match test(mid).await {
     Ok(()) => {
         best = Some(mid);
         lo = mid + 1; // success -> try higher
     }
     Err(_) => {
+        if mid == 0 {
+            break;
+        }
         hi = mid - 1; // failure -> try lower
     }
 }
```

Alternatively, use `saturating_sub`:

```diff
 Err(_) => {
-    hi = mid - 1; // failure -> try lower
+    hi = mid.saturating_sub(1); // failure -> try lower
+    if mid == 0 {
+        break;
+    }
 }
```
}📝 Committable suggestion
```rust
while lo <= hi {
    let mid = lo + (hi - lo) / 2;
    match test(mid).await {
        Ok(()) => {
            best = Some(mid);
            lo = mid + 1; // success -> try higher
        }
        Err(_) => {
            if mid == 0 {
                break;
            }
            hi = mid - 1; // failure -> try lower
        }
    }
}
```
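The guarded update can be sanity-checked in isolation. The following is a synchronous sketch of the same loop shape (the real helper is async and takes a fallible future, so this is only an approximation of it):

```rust
// Largest `x` in `[lo, hi]` for which `test(x)` succeeds, or `None` if none do.
// The guard before `mid - 1` prevents unsigned underflow when `mid == 0`.
fn binary_search_max(
    mut lo: usize,
    mut hi: usize,
    test: impl Fn(usize) -> bool,
) -> Option<usize> {
    let mut best = None;
    while lo <= hi {
        let mid = lo + (hi - lo) / 2;
        if test(mid) {
            best = Some(mid);
            lo = mid + 1; // success -> try higher
        } else {
            if mid == 0 {
                break; // no lower index exists
            }
            hi = mid - 1; // failure -> try lower
        }
    }
    best
}

fn main() {
    // All values fail, including 0: must terminate and return None.
    assert_eq!(binary_search_max(0, 99, |_| false), None);
    // Succeeds up to a limit.
    assert_eq!(binary_search_max(0, 99, |x| x <= 42), Some(42));
}
```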
🤖 Prompt for AI Agents
In `@tests/src/tests/defuse/tokens/nep245/mod.rs` around lines 17 - 28, The
binary-search loop can underflow when mid == 0 and test(mid) returns Err;
replace the direct subtraction hi = mid - 1 with a guarded update so that if mid
== 0 you break the loop (no lower index exists) otherwise set hi = mid - 1 (or
hi = mid.saturating_sub(1) combined with an if mid == 0 { break; } check).
Update the match arm for Err(_) to use this guard referencing lo, hi, mid, test
and best so the loop cannot wrap to usize::MAX.
Force-pushed from ec08d28 to 1f2f541.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@tests/contracts/multi-token-receiver-stub/src/lib.rs`:
- Around line 13-19: The stub_return_bytes function currently trusts the
incoming length and allocates based on it, risking huge allocations and silent
truncation; update stub_return_bytes to parse the length using a fallible
conversion (e.g., TryFrom/checked conversion instead of direct cast), enforce a
sensible upper bound (cap length to a constant MAX_RETURN_BYTES), and reject or
early-return (with a clear env::panic or no-op) when the parsed length is
invalid or exceeds the cap to avoid allocation/overflow; apply the same
defensive change to the similar code block referenced at lines 99-101 so both
places use try_from/checked conversion and the MAX_RETURN_BYTES cap before
allocating the Vec or calling env::value_return.
In `@tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs`:
- Around line 203-223: The test currently only checks the resolve receipt when
defuse_outcomes.len() == 2, which can skip failures when there are more receipts
or fewer; change the condition to assert that defuse_outcomes.len() >= 2 and
always inspect the second receipt: obtain resolve_outcome =
defuse_outcomes[1].clone(), call resolve_result = resolve_outcome.into_result(),
and assert resolve_result.is_ok() with a clear error message including
resolve_result.err(); if the assertion for >= 2 fails, fail the test immediately
with a descriptive message about the missing resolve receipt (use symbols
defuse_outcomes, resolve_outcome, resolve_result, execution_result,
env.defuse.id() to locate the code).
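A minimal sketch of the defensive length handling requested for the stub; `MAX_RETURN_BYTES`, the cap value, and `checked_len` are illustrative names and figures, not taken from the contract:

```rust
// Illustrative cap on how many bytes the stub may allocate and return.
const MAX_RETURN_BYTES: usize = 4 * 1024 * 1024; // 4 MiB, assumed for the sketch

// Fallibly convert a caller-supplied length and enforce the upper bound,
// instead of trusting it with a direct `as` cast.
fn checked_len(raw: u64) -> Result<usize, String> {
    let len = usize::try_from(raw).map_err(|_| "length does not fit in usize".to_string())?;
    if len > MAX_RETURN_BYTES {
        return Err(format!("length {len} exceeds cap {MAX_RETURN_BYTES}"));
    }
    Ok(len)
}

fn main() {
    assert_eq!(checked_len(1024), Ok(1024));
    assert!(checked_len(u64::MAX).is_err()); // rejected by the cap
}
```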
♻️ Duplicate comments (1)
tests/src/tests/defuse/tokens/nep245/mod.rs (1)
17-28: Potential underflow when `mid == 0` and test fails.

If `lo` is 0 and `test(0)` returns `Err`, line 25 computes `hi = mid - 1`, which causes unsigned integer underflow (`0 - 1` wraps to `usize::MAX`). This would cause the loop to continue with invalid bounds. While current usage with `min_token_count = 1` avoids this, the function could fail unexpectedly if called with `low = 0`.

🐛 Proposed fix to handle underflow

```diff
 match test(mid).await {
     Ok(()) => {
         best = Some(mid);
         lo = mid + 1; // success -> try higher
     }
     Err(_) => {
+        if mid == 0 {
+            break;
+        }
         hi = mid - 1; // failure -> try lower
     }
 }
```
🧹 Nitpick comments (2)
tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (1)
395-424: Consider consolidating log size calculation helpers.

There's some duplication between `calculate_log_sizes` (lines 395-424) and `validate_mt_batch_transfer_log_size` (lines 83-106). Both construct similar `MtTransferEvent` structures for size calculation.

♻️ Suggested consolidation

You could refactor to have a single helper that returns both sizes:

```diff
-fn validate_mt_batch_transfer_log_size(
-    sender_id: &AccountId,
-    receiver_id: &AccountId,
-    token_ids: &[String],
-    amounts: &[u128],
-) -> anyhow::Result<usize> {
-    let mt_transfer_event = MtEvent::MtTransfer(Cow::Owned(vec![MtTransferEvent {
-        authorized_id: None,
-        old_owner_id: Cow::Borrowed(receiver_id),
-        new_owner_id: Cow::Borrowed(sender_id),
-        token_ids: Cow::Owned(token_ids.to_vec()),
-        amounts: Cow::Owned(amounts.iter().copied().map(U128).collect()),
-        memo: Some(Cow::Borrowed("refund")),
-    }]));
-
-    let longest_transfer_log = mt_transfer_event.to_nep297_event().to_event_log();
-
-    anyhow::ensure!(
-        longest_transfer_log.len() <= TOTAL_LOG_LENGTH_LIMIT,
-        "transfer log will exceed maximum log limit"
-    );
-
-    Ok(longest_transfer_log.len())
-}
+fn validate_mt_batch_transfer_log_size(
+    sender_id: &AccountId,
+    receiver_id: &AccountId,
+    token_ids: &[String],
+    amounts: &[u128],
+) -> anyhow::Result<usize> {
+    let (_, refund_size) = calculate_log_sizes(sender_id, receiver_id, token_ids, amounts);
+    anyhow::ensure!(
+        refund_size <= TOTAL_LOG_LENGTH_LIMIT,
+        "transfer log will exceed maximum log limit"
+    );
+    Ok(refund_size)
+}
```

sandbox/src/lib.rs (1)
92-94: Consider returning `Result` from `fast_forward` instead of panicking.

Since this is a public API, avoid `unwrap()` so callers can handle errors with context; good to lock this in now while the method is new.

♻️ Suggested signature change

```diff
-    pub async fn fast_forward(&self, blocks: u64) {
-        self.sandbox.fast_forward(blocks).await.unwrap();
-    }
+    pub async fn fast_forward(&self, blocks: u64) -> anyhow::Result<()> {
+        self.sandbox.fast_forward(blocks).await.map_err(Into::into)
+    }
```
Force-pushed from 4cf4f66 to 89ea3f2.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 165-166: The deny job currently installs cargo-audit without
version pinning causing drift vs the report job; update the cargo install
command in the security_audit_deny job to match the report job by pinning
cargo-audit to "^0.22" and keeping the --locked flag (i.e., replace the unpinned
install invocation with the same `cargo install cargo-audit --version "^0.22"
--locked` used in the report job).
In `@Makefile.toml`:
- Around line 211-213: The run-tests-long cargo task (tasks.run-tests-long)
currently passes --features=long to the entire workspace, which fails because
most crates don't define a long feature; update the task to either scope the
cargo invocation to the crates that actually support the long feature by adding
package selectors (use -p or --package with the crate names that define long) or
remove the workspace-wide --features=long and instead run cargo test with
--features=long per-package for those crates; modify the args array for
tasks.run-tests-long to use targeted package flags (e.g., -p/<crate>) or change
the command to iterate only over the crates that expose the long feature so the
test run no longer fails.
In `@near-utils/src/event.rs`:
- Around line 1-3: Update the doc comment for the constant
TOTAL_LOG_LENGTH_LIMIT to state it is the maximum total length of all log
entries in a single execution (in bytes) rather than a per-entry limit; locate
the pub const TOTAL_LOG_LENGTH_LIMIT: usize = 16384; declaration in event.rs and
replace the misleading comment with a line like "Maximum total length of all log
entries in a single execution (in bytes)." and keep the existing GitHub
reference link.
In `@tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs`:
- Around line 373-374: The test function name has a typo: change the function
identifier
mt_desposit_resolve_can_handle_large_blob_value_returned_from_notification to
mt_deposit_resolve_can_handle_large_blob_value_returned_from_notification;
update the async test function declaration (#[tokio::test] async fn ...) so the
symbol matches everywhere it may be referenced (test runner discovery and any
calls), ensuring only the spelling of "desposit" → "deposit" is modified and
other code and attributes remain unchanged.
♻️ Duplicate comments (1)
tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (1)
239-248: Ensure the resolve receipt is always validated.

The current check only validates when exactly 2 receipts exist. If there are more or fewer, the resolve callback validation is silently skipped. The second test at lines 423-427 uses the correct pattern with `assert!(defuse_outcomes.len() >= 2, ...)`.

🔧 Proposed fix for consistent validation

```diff
-    if defuse_outcomes.len() == 2 {
-        let resolve_outcome = defuse_outcomes[1].clone();
-        let resolve_result = resolve_outcome.into_result();
-        assert!(
-            resolve_result.is_ok(),
-            "CRITICAL: mt_resolve_deposit callback failed for token_count={token_count}! \
-             This indicates insufficient gas allocation in the contract. Error: {:?}",
-            resolve_result.err()
-        );
-    }
+    anyhow::ensure!(
+        defuse_outcomes.len() >= 2,
+        "Expected at least 2 defuse receipts, got {}",
+        defuse_outcomes.len()
+    );
+    let resolve_outcome = defuse_outcomes[1].clone();
+    let resolve_result = resolve_outcome.into_result();
+    assert!(
+        resolve_result.is_ok(),
+        "CRITICAL: mt_resolve_deposit callback failed for token_count={token_count}! \
+         This indicates insufficient gas allocation in the contract. Error: {:?}",
+        resolve_result.err()
+    );
```
🧹 Nitpick comments (1)
tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (1)
336-336: Use `expect()` with a descriptive message.

If `binary_search_max` returns `None` (meaning even `min_token_count = 1` failed), the `unwrap()` will panic without context, making debugging harder.

♻️ Proposed fix

```diff
-    let max_deposited_count = max_deposited_count.unwrap();
+    let max_deposited_count = max_deposited_count
+        .expect("Failed to find any successful token count - even min_token_count=1 failed");
```
```yaml
# Require 0.22+ for CVSS 4.0 support (advisory-db now contains CVSS 4.0 entries)
run: cargo install cargo-audit --version "^0.22" --locked
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Locate all cargo-audit installs in the CI file
rg -n "cargo install cargo-audit" .github/workflows/ci.ymlRepository: near/intents
Length of output: 168
🏁 Script executed:
#!/bin/bash
# Get context around cargo-audit installations and identify which job each belongs to
rg -B 5 "cargo install cargo-audit" .github/workflows/ci.ymlRepository: near/intents
Length of output: 589
Align cargo-audit version across audit jobs.
The deny job installs cargo-audit without version pinning while the report job pins it to ^0.22 for CVSS 4.0 support. This creates version drift between the two jobs. Update the deny job to match:
♻️ Suggested update for security_audit_deny
```diff
-      - name: Install Cargo Plugins
-        run: cargo install cargo-audit --locked
+      - name: Install Cargo Plugins
+        run: cargo install cargo-audit --version "^0.22" --locked
```

🤖 Prompt for AI Agents
+ run: cargo install cargo-audit --version "^0.22" --locked🤖 Prompt for AI Agents
In @.github/workflows/ci.yml around lines 165 - 166, The deny job currently
installs cargo-audit without version pinning causing drift vs the report job;
update the cargo install command in the security_audit_deny job to match the
report job by pinning cargo-audit to "^0.22" and keeping the --locked flag
(i.e., replace the unpinned install invocation with the same `cargo install
cargo-audit --version "^0.22" --locked` used in the report job).
```toml
[tasks.run-tests-long]
command = "cargo"
args = ["test", "--workspace", "--all-targets", "--features=long", "${@}"]
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
python - <<'PY'
import glob, pathlib, tomllib
root = pathlib.Path("Cargo.toml")
if not root.exists():
raise SystemExit("Root Cargo.toml not found")
data = tomllib.loads(root.read_text())
members = data.get("workspace", {}).get("members", [])
if not members:
print("No workspace.members found in Cargo.toml")
raise SystemExit(0)
def expand(member):
if any(ch in member for ch in "*?[]"):
return [pathlib.Path(p) for p in glob.glob(member)]
return [pathlib.Path(member)]
missing = []
for m in members:
for path in expand(m):
cargo = path / "Cargo.toml"
if not cargo.exists():
print(f"Missing {cargo}")
continue
pkg = tomllib.loads(cargo.read_text())
features = pkg.get("features", {})
if "long" not in features:
missing.append(str(cargo))
if missing:
print("Packages missing feature 'long':")
print("\n".join(sorted(set(missing))))
else:
print("All workspace packages define feature 'long'")
PYRepository: near/intents
Length of output: 726
Fix the run-tests-long task: most workspace members lack the `long` feature.

The command will fail when executed because 32 of 33 workspace members don't define the `long` feature (missing from admin-utils, auth-call, bitmap, borsh-utils, controller, core, crypto, deadline, decimal, defuse, erc191, escrow-swap, fees, io-utils, map-utils, near-utils, nep245, nep413, nep461, num-utils, poa-factory, poa-token, randomness, sandbox, sep53, serde-utils, test-utils, tip191, token-id, ton-connect, webauthn, wnear). Either add `long` feature definitions to all workspace crates, or scope the command to specific crates using `-p <crate>` or `--package <crate>`.
🤖 Prompt for AI Agents
In `@Makefile.toml` around lines 211 - 213, The run-tests-long cargo task
(tasks.run-tests-long) currently passes --features=long to the entire workspace,
which fails because most crates don't define a long feature; update the task to
either scope the cargo invocation to the crates that actually support the long
feature by adding package selectors (use -p or --package with the crate names
that define long) or remove the workspace-wide --features=long and instead run
cargo test with --features=long per-package for those crates; modify the args
array for tasks.run-tests-long to use targeted package flags (e.g., -p/<crate>)
or change the command to iterate only over the crates that expose the long
feature so the test run no longer fails.
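One way the scoped variant could look, assuming (hypothetically) that a single tests crate is the member defining the `long` feature; the package name below is a placeholder, not taken from the repository:

```toml
[tasks.run-tests-long]
command = "cargo"
# Scope --features=long to the crate(s) that define it; "defuse-tests"
# is a placeholder for the actual package name.
args = ["test", "--package", "defuse-tests", "--all-targets", "--features=long", "${@}"]
```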
```rust
/// Maximum length of a single log entry in NEAR runtime.
/// See: <https://github.com/near/nearcore/blob/v2.5.0/runtime/near-vm-runner/src/logic/logic.rs#L42>
pub const TOTAL_LOG_LENGTH_LIMIT: usize = 16384;
```
🧩 Analysis chain
🌐 Web query:
NEAR runtime TOTAL_LOG_LENGTH_LIMIT 16384
💡 Result:
"TOTAL_LOG_LENGTH_LIMIT = 16384" is the protocol/runtime limit on the total bytes of emitted logs for a single execution (a contract call/receipt). If a contract's logs exceed this (16,384 bytes) the runtime will fail the call with TotalLogLengthExceeded. This limit appears in the protocol/limit_config as max_total_log_length = 16384. [1][2]
Sources:
- NEAR protocol config / RPC example showing max_total_log_length = 16384. [1]
- Runtime error list including TotalLogLengthExceeded (the error raised when total log length exceeds the limit). [2]
🌐 Web query:
NEAR nearcore v2.5.0 logic.rs log length limit
💡 Result:
I couldn't find a specific reference to a "logic.rs" log-length limit in nearcore v2.5.0 from public docs or the nearcore repo with the search terms given. Could you paste the logic.rs snippet or point me to the file/line (GitHub URL or crate path) you mean? With that I can locate the exact limit and explain it.
🌐 Web query:
nearcore v2.5.0 runtime near-vm-runner logic.rs file structure
💡 Result:
Summary outline of logic.rs (runtime/near-vm-runner/src/logic/logic.rs) in nearcore v2.5.0 — high-level structure:
- Top-level imports and feature flags. [1]
- Type/enum definitions used by the runner (e.g., VMOutcome, PromiseResult, VMLogicError variants). [1]
- Traits & key public items re-exported by the crate (CompiledContract, VM, PreparedContract). [1]
- VMLogic implementation — the core struct that:
- stores VMContext and External handle,
- implements host-function handlers that expose near-vm-logic APIs to WASM (storage read/write, logs, promises, input/output, math/hash helpers, context getters, etc.). [2][3]
- Helpers for memory/register management and conversion between WASM memory and runtime types (read/write registers, serialize/deserialize). [1][3]
- Error types and conversion helpers (mapping VM/wasm traps to VMLogic errors). [1]
- Compilation/preparation glue points referenced by the runner (prepare/compile/cache interactions). [1]
- Unit tests for host functions and VM interactions (bottom of file or in nearby test modules). [1]
Sources / references:
- near-vm-runner crate docs (module overview and exported logic module). [1]
- Nomicon / Runtime spec describing near-vm-runner and VMLogic role. [2]
- Implementation reference pointer to logic.rs in the nearcore repo. [3]
🌐 Web query:
github.com/near/nearcore blob v2.5.0 near-vm-runner
💡 Result:
The near-vm-runner crate for nearcore v2.5.0 is at runtime/near-vm-runner in the near/nearcore repo — e.g. GitHub blob path: https://github.com/near/nearcore/tree/v2.5.0/runtime/near-vm-runner. [1][2]
Sources:
- near/nearcore repository. [1]
- near-vm-runner crate docs (near-vm-runner). [2]
🌐 Web query:
site:github.com/near/nearcore/blob/v2.5.0 runtime/near-vm-runner src/logic logic.rs line 42
💡 Result:
I tried to fetch that file/line from GitHub but ran into an error accessing the repository file. Do you want me to try again (or paste the file contents here)?
🌐 Web query:
NEAR protocol specification max_total_log_length specification
💡 Result:
The NEAR protocol specification sets max_total_log_length = 16384 (bytes). [1][2]
Sources:
[1] NEAR RPC protocol config (limit_config -> max_total_log_length).
[2] NEAR Protocol / Nomicon parameter listing.
🌐 Web query:
NEAR runtime "single log entry" vs "total log length" difference
💡 Result:
Do you mean NEAR Protocol's runtime (smart contract) log limits/fields — e.g., RPC responses that show "single_log_entry" vs "total_log_length" — or something else (explorer, node logs, or an SDK/runtime API)? If you confirm which NEAR component and where you saw those terms (link or JSON snippet), I’ll look up the exact difference and cite sources.
🌐 Web query:
nearcore log length limit single entry total execution
💡 Result:
Short answer: nearcore enforces a maximum of 100 log entries per execution and a maximum total log bytes of 16,384 (i.e. ~16 KiB) across that execution — there is no separate hard limit for a single log entry beyond those totals, so a single log can be as large as the total log budget (subject to UTF‑8 and other limits). [1][2]
Sources:
- NEAR protocol/nearcore limit_config showing max_number_logs = 100 and max_total_log_length = 16384. [1]
- NEAR forum discussion noting the 100 logs-per-execution limit. [2]
🏁 Script executed:
cat -n near-utils/src/event.rs | head -10Repository: near/intents
Length of output: 380
Fix documentation: 16384 is the total log length limit per execution, not per single log entry.
The constant value (16384 bytes) is correct per NEAR protocol spec, but the documentation comment is misleading. NEAR enforces a maximum of 100 log entries per execution with a combined total of 16,384 bytes — there is no separate hard limit on individual log entry size. Update the comment to clarify this is the total log budget for an entire execution:
```rust
/// Maximum total length of all log entries in a single execution (in bytes).
/// See: <https://github.com/near/nearcore/blob/v2.5.0/runtime/near-vm-runner/src/logic/logic.rs#L42>
```
🤖 Prompt for AI Agents
In `@near-utils/src/event.rs` around lines 1 - 3, Update the doc comment for the
constant TOTAL_LOG_LENGTH_LIMIT to state it is the maximum total length of all
log entries in a single execution (in bytes) rather than a per-entry limit;
locate the pub const TOTAL_LOG_LENGTH_LIMIT: usize = 16384; declaration in
event.rs and replace the misleading comment with a line like "Maximum total
length of all log entries in a single execution (in bytes)." and keep the
existing GitHub reference link.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@nep245/src/events.rs`:
- Around line 25-26: Remove the duplicate TOTAL_LOG_LENGTH_LIMIT constant
definition from nep245/src/events.rs and instead add an import: use
defuse_near_utils::TOTAL_LOG_LENGTH_LIMIT; so the module uses the centralized
constant; ensure any references to TOTAL_LOG_LENGTH_LIMIT in this file remain
unchanged and compile after removing the local pub const.
🧹 Nitpick comments (2)
nep245/src/events.rs (2)
44-49: Conservative overhead estimation may over-reject valid events. The function returns `max(REFUND_EXTRA_BYTES, memo_length)`, but when a memo already exists, the actual overhead is the difference between the refund memo size and the existing memo size, not the full `REFUND_EXTRA_BYTES`.

For example, if `memo = Some("refu")` (4 chars):

- Original includes `,"memo":"refu"` (~13 bytes)
- Refund includes `,"memo":"refund"` (~16 bytes)
- Actual overhead: ~3 bytes, not 16

This conservative approach is safe (won't cause runtime failures) but may reject some valid events unnecessarily. If precision matters, consider computing the actual delta.
136-157: Consider using this accurate method in `validate_with_refund_overhead` for transfers. This method computes the exact refund log size, while `validate_with_refund_overhead` uses the heuristic `compute_refund_overhead`. For `MtTransfer` events, using this accurate method would avoid over-rejecting valid events.

The current approach is simpler and applies uniformly to all event types, so this is just a consideration if precision becomes important.
eaa27f3 to 0e68e57 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@nep245/src/events.rs`:
- Around line 44-83: The refund overhead calculation currently returns the memo
length instead of the extra bytes needed, causing overestimation; change
refund_log_extra_bytes_count so that when memo is None it returns
REFUND_EXTRA_BYTES (the full JSON field cost), and when memo is Some(m) it
returns the additional bytes required to replace the existing memo with "refund"
i.e. max(0, REFUND_STR_LEN.saturating_sub(m.len())); update tests that assert
overhead to expect this delta-based value; relevant symbols:
refund_log_extra_bytes_count, REFUND_EXTRA_BYTES, REFUND_STR_LEN,
compute_refund_overhead, and validate_with_refund_overhead.
♻️ Duplicate comments (2)
tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (2)
208-221: Always validate resolve receipt presence. The check only runs when `defuse_outcomes.len() == 2`; if the count differs, resolve failures can be skipped. Assert `>= 2` and always inspect the second receipt.

✅ Suggested fix

```diff
-if defuse_outcomes.len() == 2 {
-    let resolve_outcome = defuse_outcomes[1].clone();
-    let resolve_result = resolve_outcome.into_result();
-    assert!(
-        resolve_result.is_ok(),
-        "CRITICAL: mt_resolve_deposit callback failed for token_count={token_count}! \
-         This indicates insufficient gas allocation in the contract. Error: {:?}",
-        resolve_result.err()
-    );
-}
+anyhow::ensure!(
+    defuse_outcomes.len() >= 2,
+    "Expected at least 2 defuse receipts, got {}",
+    defuse_outcomes.len()
+);
+let resolve_outcome = defuse_outcomes[1].clone();
+let resolve_result = resolve_outcome.into_result();
+assert!(
+    resolve_result.is_ok(),
+    "CRITICAL: mt_resolve_deposit callback failed for token_count={token_count}! \
+     This indicates insufficient gas allocation in the contract. Error: {:?}",
+    resolve_result.err()
+);
```
351-352: Fix typo in test name: "desposit" → "deposit".

✏️ Proposed fix

```diff
-async fn mt_desposit_resolve_can_handle_large_blob_value_returned_from_notification() {
+async fn mt_deposit_resolve_can_handle_large_blob_value_returned_from_notification() {
```
41589fe to d2724a6 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs`:
- Line 26: Remove the local declaration of TOTAL_LOG_LENGTH_LIMIT and import the
shared constant from defuse::nep245; specifically, delete the line "const
TOTAL_LOG_LENGTH_LIMIT: usize = 16384;" in mt_transfer_resolve_gas.rs and add a
use/import for TOTAL_LOG_LENGTH_LIMIT from the defuse::nep245 module (matching
how mt_deposit_resolve_gas.rs imports it) so the test reuses the publicly
exported constant instead of redefining it.
♻️ Duplicate comments (1)
nep245/src/events.rs (1)
16-17: Duplicate constant should be imported from `defuse-near-utils`. As flagged in previous reviews, `TOTAL_LOG_LENGTH_LIMIT` is already exported from `near-utils/src/event.rs`. Import it instead of redefining:

```rust
use defuse_near_utils::TOTAL_LOG_LENGTH_LIMIT;
```
🧹 Nitpick comments (2)
tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (2)
320-327: Consider documenting the token count magic numbers. The generation of 65 max-length tokens plus 2 specific trigger tokens appears carefully calculated to hit the boundary condition. A brief comment explaining why these specific counts/lengths trigger the refund-log-exceeds-limit condition would improve maintainability:

```rust
// 65 max-length tokens keep transfer log under limit but push refund log over
// The 2 additional tokens ensure the refund event exceeds TOTAL_LOG_LENGTH_LIMIT
let token_ids: Vec<String> = (1..=65)
    ...
```
380-384: String-based error matching may be fragile. If the error message in the contract changes, this assertion will no longer match and the test will fail spuriously. Consider also checking for a specific error type/code if the contract API supports it, or add a comment noting this coupling.

```rust
// NOTE: This string must match the error message in defuse contract's mt_batch_transfer_call
assert!(
    result_str.contains("too many tokens: refund log would exceed protocol limit"),
    ...
);
```
```rust
use std::{borrow::Cow, future::Future};
use strum::IntoEnumIterator;

const TOTAL_LOG_LENGTH_LIMIT: usize = 16384;
```
🧩 Analysis chain
🏁 Script executed:
# Find all definitions of TOTAL_LOG_LENGTH_LIMIT in the codebase
rg -n "TOTAL_LOG_LENGTH_LIMIT" --type rust -B 1 -A 1

Repository: near/intents
Length of output: 4904
🏁 Script executed:
# Check if there are any pub declarations of TOTAL_LOG_LENGTH_LIMIT
rg -n "pub\s+const\s+TOTAL_LOG_LENGTH_LIMIT|pub\s+use.*TOTAL_LOG_LENGTH_LIMIT" --type rust

Repository: near/intents
Length of output: 280
🏁 Script executed:
# Check the structure of the test file and its directory
git ls-files tests/src/tests/defuse/tokens/nep245/ | head -20

Repository: near/intents
Length of output: 277
🏁 Script executed:
# Check the structure of nep245 src directory
git ls-files | grep -E "nep245/src|near.*utils" | head -30

Repository: near/intents
Length of output: 474
Import `TOTAL_LOG_LENGTH_LIMIT` from `defuse::nep245` instead of redefining locally.
The constant is already publicly exported from `nep245/src/events.rs` and `near-utils`. Remove the local definition and import it to avoid duplication. A similar test file in the same directory (`mt_deposit_resolve_gas.rs`) already uses this pattern:

```diff
-const TOTAL_LOG_LENGTH_LIMIT: usize = 16384;
+use defuse::nep245::TOTAL_LOG_LENGTH_LIMIT;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-const TOTAL_LOG_LENGTH_LIMIT: usize = 16384;
+use defuse::nep245::TOTAL_LOG_LENGTH_LIMIT;
```
🤖 Prompt for AI Agents
In `@tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs` at line 26,
Remove the local declaration of TOTAL_LOG_LENGTH_LIMIT and import the shared
constant from defuse::nep245; specifically, delete the line "const
TOTAL_LOG_LENGTH_LIMIT: usize = 16384;" in mt_transfer_resolve_gas.rs and add a
use/import for TOTAL_LOG_LENGTH_LIMIT from the defuse::nep245 module (matching
how mt_deposit_resolve_gas.rs imports it) so the test reuses the publicly
exported constant instead of redefining it.
ad6f322 to 63dfb0d (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.cargo/audit.toml:
- Around line 5-10: Update the placeholder upstream issue link in the ignore
note: replace "https://github.com/near/nearcore/issues/XXXXX (upstream)" with
the actual upstream PR URL "https://github.com/jeromefroe/lru-rs/pull/224
(merged January 7, 2026)" so the tracked line accurately references the upstream
fix for the lru advisory in the ignore comment near the ignore =
["RUSTSEC-2026-0002"] entry.
♻️ Duplicate comments (7)
Makefile.toml (1)
211-213: `run-tests-long` still applies `--features=long` to the entire workspace. This will fail for workspace crates that don't define the feature. Scope the run to packages that declare `long`, or run per-crate with `-p`/`--package` filters.

```bash
#!/bin/bash
set -euo pipefail
python - <<'PY'
import glob, pathlib, tomllib

root = pathlib.Path("Cargo.toml")
data = tomllib.loads(root.read_text())
members = data.get("workspace", {}).get("members", [])

def expand(member):
    if any(ch in member for ch in "*?[]"):
        return [pathlib.Path(p) for p in glob.glob(member)]
    return [pathlib.Path(member)]

missing = []
for m in members:
    for path in expand(m):
        cargo = path / "Cargo.toml"
        if not cargo.exists():
            continue
        pkg = tomllib.loads(cargo.read_text())
        features = pkg.get("features", {})
        if "long" not in features:
            missing.append(str(cargo))

print("\n".join(sorted(set(missing))) or "All workspace packages define feature 'long'")
PY
```
13-19: Guard `stub_return_bytes` length and avoid `U128 → u64` truncation/panic. The stub trusts the input length and uses unchecked casts; this can allocate huge buffers or panic before the intended limit checks.

🛠 Suggested guardrails

```diff
 pub extern "C" fn stub_return_bytes() {
     if let Some(input) = near_sdk::env::input() {
         if input.len() >= 8 {
-            let len = u64::from_le_bytes(input[..8].try_into().unwrap()) as usize;
-            let bytes = vec![0xf0u8; len];
+            const MAX_RETURN_BYTES: usize = 4 * 1024 * 1024;
+            let len = u64::from_le_bytes(input[..8].try_into().unwrap());
+            let len = usize::try_from(len)
+                .ok()
+                .filter(|l| *l <= MAX_RETURN_BYTES)
+                .unwrap_or_else(|| env::panic_str("return length too large"));
+            let bytes = vec![0xf0u8; len];
             near_sdk::env::value_return(&bytes);
         }
     }
 }
 @@
-    MTReceiverMode::ReturnBytes(len) => Promise::new(env::current_account_id())
-        .stub_return_bytes(len.0.try_into().unwrap())
-        .into(),
+    MTReceiverMode::ReturnBytes(len) => {
+        let len = u64::try_from(len.0)
+            .unwrap_or_else(|_| env::panic_str("ReturnBytes length overflows u64"));
+        Promise::new(env::current_account_id())
+            .stub_return_bytes(len)
+            .into()
+    }
```

Also applies to: 99-101
tests/src/tests/defuse/tokens/nep245/mod.rs (1)
7-26: Handle `mid == 0` to avoid usize underflow. When the test fails at `mid == 0`, `hi = mid - 1` wraps to `usize::MAX`.

🛠 Safer update for the Err branch

```diff
 Err(_) => {
-    hi = mid - 1; // failure -> try lower
+    if mid == 0 {
+        break;
+    }
+    hi = mid - 1; // failure -> try lower
 }
```
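For reference, a self-contained sketch of such a probe loop with the underflow guard in place (an illustrative reimplementation with an assumed signature, not the test helper's actual code):

```rust
// Find the largest value in [lo, hi] for which `passes` succeeds,
// without underflowing usize when the search bottoms out at 0.
fn binary_search_max(mut lo: usize, mut hi: usize, passes: impl Fn(usize) -> bool) -> Option<usize> {
    let mut best = None;
    while lo <= hi {
        let mid = lo + (hi - lo) / 2;
        if passes(mid) {
            best = Some(mid);
            lo = mid + 1; // success -> try higher
        } else {
            if mid == 0 {
                break; // `mid - 1` would wrap to usize::MAX
            }
            hi = mid - 1; // failure -> try lower
        }
    }
    best
}

fn main() {
    // Largest n with n * n <= 1000 is 31.
    assert_eq!(binary_search_max(0, 1000, |n| n * n <= 1000), Some(31));
    // Nothing passes: returns None without underflowing at mid == 0.
    assert_eq!(binary_search_max(0, 10, |_| false), None);
    println!("ok");
}
```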
```bash
#!/bin/bash
# Find call sites to confirm whether low can be 0
rg -n --type=rs -C3 "binary_search_max\s*\(" tests
```

tests/src/tests/defuse/tokens/nep245/mt_transfer_resolve_gas.rs (1)
341-345: Reuse the shared log-length limit constant. Line 344 uses `TOTAL_LOG_LENGTH_LIMIT`; to avoid drift, import it from `defuse::nep245` and remove the local constant (Line 26). This was raised before and still applies.

♻️ Proposed fix

```diff
 use defuse::{
     core::token_id::{TokenId, nep245::Nep245TokenId},
-    nep245::{MtEvent, MtTransferEvent},
+    nep245::{MtEvent, MtTransferEvent, TOTAL_LOG_LENGTH_LIMIT},
 };
 @@
-const TOTAL_LOG_LENGTH_LIMIT: usize = 16384;
```

nep245/src/events.rs (1)
3-5: Avoid a duplicate `TOTAL_LOG_LENGTH_LIMIT` definition. Line 16 redefines a constant already exported by `defuse-near-utils`. Re-export it here instead of redefining to keep a single source of truth.

♻️ Proposed fix

```diff
-use near_sdk::{
+use near_sdk::{
     AccountIdRef, AsNep297Event, FunctionError, json_types::U128, near, serde::Deserialize,
 };
 @@
-/// NEAR protocol limit for log messages (16 KiB)
-pub const TOTAL_LOG_LENGTH_LIMIT: usize = 16384;
+/// NEAR protocol limit for log messages (16 KiB)
+pub use defuse_near_utils::TOTAL_LOG_LENGTH_LIMIT;
```

Also applies to: 16-17
tests/src/tests/defuse/tokens/nep245/mt_deposit_resolve_gas.rs (2)
212-221: Always validate resolve receipt presence. Line 212 only checks when `defuse_outcomes.len() == 2`; if the count differs, resolve failures can be silently skipped. This was flagged earlier and is still present.

✅ Suggested fix

```diff
-if defuse_outcomes.len() == 2 {
-    let resolve_outcome = defuse_outcomes[1].clone();
-    let resolve_result = resolve_outcome.into_result();
-    assert!(
-        resolve_result.is_ok(),
-        "CRITICAL: mt_resolve_deposit callback failed for token_count={token_count}! \
-         This indicates insufficient gas allocation in the contract. Error: {:?}",
-        resolve_result.err()
-    );
-}
+anyhow::ensure!(
+    defuse_outcomes.len() >= 2,
+    "Expected at least 2 defuse receipts, got {}",
+    defuse_outcomes.len()
+);
+let resolve_outcome = defuse_outcomes[1].clone();
+let resolve_result = resolve_outcome.into_result();
+assert!(
+    resolve_result.is_ok(),
+    "CRITICAL: mt_resolve_deposit callback failed for token_count={token_count}! \
+     This indicates insufficient gas allocation in the contract. Error: {:?}",
+    resolve_result.err()
+);
```
351-352: Fix test name typo ("desposit" → "deposit"). The misspelling in the test name (Line 351) reduces clarity/discoverability. This is a repeat from earlier.

✏️ Proposed fix

```diff
-async fn mt_desposit_resolve_can_handle_large_blob_value_returned_from_notification() {
+async fn mt_deposit_resolve_can_handle_large_blob_value_returned_from_notification() {
```
```toml
# Ignore the lru unsound advisory - it comes from near-vm-runner which is
# locked to lru ^0.12.3 and cannot be updated to the fixed 0.16.3 version.
# The advisory relates to IterMut's Stacked Borrows violation, which does
# not affect our usage as we don't use IterMut directly.
# Tracked: https://github.com/near/nearcore/issues/XXXXX (upstream)
ignore = ["RUSTSEC-2026-0002"]
```
🧩 Analysis chain
🌐 Web query:
RUSTSEC-2026-0002 lru upstream issue github
💡 Result:
RUSTSEC-2026-0002 is an unsoundness advisory for the Rust crate lru: its IterMut::next / next_back temporarily create an exclusive reference that violates Stacked Borrows by invalidating an internal pointer held by the HashMap. Affected versions are >= 0.9.0 and < 0.16.3; fixed in lru >= 0.16.3. [1][3]
Upstream GitHub fix (PR):
https://github.com/jeromefroe/lru-rs/pull/224
Merged January 7, 2026. [1][2]
Related alias: GHSA-rhfx-m35p-ff5j. [1][3]
Sources: [1] [2] [3]
Replace the placeholder upstream issue link with the actual upstream PR.
The ignore note references an issues/XXXXX placeholder. Link to the actual upstream fix: jeromefroe/lru-rs#224 (merged January 7, 2026).
🤖 Prompt for AI Agents
In @.cargo/audit.toml around lines 5 - 10, Update the placeholder upstream issue
link in the ignore note: replace "https://github.com/near/nearcore/issues/XXXXX
(upstream)" with the actual upstream PR URL
"https://github.com/jeromefroe/lru-rs/pull/224 (merged January 7, 2026)" so the
tracked line accurately references the upstream fix for the lru advisory in the
ignore comment near the ignore = ["RUSTSEC-2026-0002"] entry.
63dfb0d to 851ca7f (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@nep245/src/events.rs`:
- Around line 45-91: The memo length must account for JSON escaping: update
refund_log_delta to compute the memo's JSON-escaped byte length (e.g., via
serde_json::to_string(m) and subtracting the surrounding quotes) instead of raw
m.len(), and use that escaped length when computing overhead and savings; also
precompute the escaped length for the constant REFUND_MEMO (adjust
REFUND_STR_LEN or add a new ESCAPED_REFUND_STR_LEN) and ensure
REFUND_EXTRA_BYTES still reflects the surrounding JSON fragments, adding
serde_json as a dependency if not present, while keeping compute_refund_delta
and the RefundLogDelta logic unchanged.
```rust
const REFUND_EXTRA_BYTES: usize = r#","memo":""#.len() + REFUND_MEMO.len();
const REFUND_STR_LEN: usize = REFUND_MEMO.len();

#[derive(Default, Clone, Copy)]
struct RefundLogDelta {
    overhead: usize,
    savings: usize,
}

impl RefundLogDelta {
    const fn saturating_add(self, other: Self) -> Self {
        Self {
            overhead: self.overhead.saturating_add(other.overhead),
            savings: self.savings.saturating_add(other.savings),
        }
    }
}

const fn refund_log_delta(memo: Option<&str>) -> RefundLogDelta {
    let Some(m) = memo else {
        return RefundLogDelta {
            overhead: REFUND_EXTRA_BYTES,
            savings: 0,
        };
    };
    RefundLogDelta {
        overhead: REFUND_STR_LEN.saturating_sub(m.len()),
        savings: m.len().saturating_sub(REFUND_STR_LEN),
    }
}

fn compute_refund_delta(event: &MtEvent<'_>) -> RefundLogDelta {
    match event {
        MtEvent::MtMint(events) => events
            .iter()
            .map(|e| refund_log_delta(e.memo.as_deref()))
            .fold(RefundLogDelta::default(), RefundLogDelta::saturating_add),
        MtEvent::MtBurn(events) => events
            .iter()
            .map(|e| refund_log_delta(e.memo.as_deref()))
            .fold(RefundLogDelta::default(), RefundLogDelta::saturating_add),
        MtEvent::MtTransfer(events) => events
            .iter()
            .map(|e| refund_log_delta(e.memo.as_deref()))
            .fold(RefundLogDelta::default(), RefundLogDelta::saturating_add),
    }
}
```
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "events.rs" | head -20

Repository: near/intents
Length of output: 131
🏁 Script executed:
git ls-files | grep -E "events\.rs|event" | head -20

Repository: near/intents
Length of output: 174
🏁 Script executed:
cat -n nep245/src/events.rs | head -150

Repository: near/intents
Length of output: 5844
🏁 Script executed:
cat -n nep245/src/events.rs | sed -n '150,200p'

Repository: near/intents
Length of output: 2239
🏁 Script executed:
rg "to_event_log|to_nep297_event" nep245/src/events.rs -A 5 -B 5

Repository: near/intents
Length of output: 2609
🏁 Script executed:
rg "to_event_log\|to_nep297_event" --type rust -B 3 -A 10 | head -80

Repository: near/intents
Length of output: 38
🏁 Script executed:
rg "trait.*Nep297Event|impl.*Nep297Event" --type rust -A 10 | head -50

Repository: near/intents
Length of output: 38
🏁 Script executed:
rg "AsNep297Event" --type rust -B 5 -A 10 | head -100

Repository: near/intents
Length of output: 7847
🏁 Script executed:
rg "fn to_event_log" --type rust -B 2 -A 15 | head -50

Repository: near/intents
Length of output: 38
🏁 Script executed:
rg "to_event_log" --type rust -B 2 -A 5

Repository: near/intents
Length of output: 18709
🏁 Script executed:
find . -type f -name "*.rs" -path "*/tests/*" | xargs grep -l "refund_log_delta\|REFUND_MEMO" | head -5

Repository: near/intents
Length of output: 232
🏁 Script executed:
rg "REFUND_MEMO" --type rust -B 2 -A 2

Repository: near/intents
Length of output: 9241
🏁 Script executed:
find . -type f -name "*.rs" | xargs grep -l "defuse_near_utils" | head -5

Repository: near/intents
Length of output: 275
🏁 Script executed:
python3 << 'EOF'
import json
# Check how JSON encodes various strings
test_cases = [
"refund",
"hello",
'test"quote',
"test\\backslash",
"test\ncontrol",
"test\ttab",
]
for test in test_cases:
raw_len = len(test)
json_serialized = json.dumps(test)
json_len = len(json_serialized)
escaped_content_len = json_len - 2 # Remove quotes added by json.dumps
print(f"String: {test!r}")
print(f" Raw length: {raw_len}")
print(f" JSON serialized: {json_serialized}")
print(f" JSON length (with quotes): {json_len}")
print(f" JSON length (without quotes): {escaped_content_len}")
print(f" Difference: {escaped_content_len - raw_len}")
print()
EOF

Repository: near/intents
Length of output: 142
🏁 Script executed:
cat near-utils/src/event.rs

Repository: near/intents
Length of output: 405
🏁 Script executed:
cat -n nep245/src/events.rs | sed -n '200,300p'

Repository: near/intents
Length of output: 5021
🏁 Script executed:
rg "memo" nep245/src/events.rs -B 2 -A 2 | head -80

Repository: near/intents
Length of output: 2990
🏁 Script executed:
cat -n nep245/src/events.rs | sed -n '300,400p'

Repository: near/intents
Length of output: 3799
🏁 Script executed:
git log --oneline nep245/src/events.rs | head -20

Repository: near/intents
Length of output: 94
🏁 Script executed:
cat -n nep245/src/events.rs | sed -n '63,74p'

Repository: near/intents
Length of output: 495
🏁 Script executed:
rg "fn check_refund|refund_log_delta" nep245/src/events.rs -B 3 -A 15 | head -100

Repository: near/intents
Length of output: 2179
🏁 Script executed:
cd nep245 && cargo build 2>&1 | head -20

Repository: near/intents
Length of output: 165
🏁 Script executed:
find . -name "Cargo.toml" -path "*/nep245/*" | head -1

Repository: near/intents
Length of output: 75
🏁 Script executed:
rg "memo.*Option" nep245/src/events.rs -B 3 -A 3

Repository: near/intents
Length of output: 1706
🏁 Script executed:
cat nep245/Cargo.toml

Repository: near/intents
Length of output: 463
Account for JSON escaping in memo length calculations for refund delta.
The `refund_log_delta` function uses raw `m.len()`, but when memos are serialized to JSON, characters like `"`, `\`, and control chars get escaped, increasing their byte length. For memos without special characters (like `"refund"`), this has no effect. However, a memo like `foo"bar` (7 raw bytes) becomes 8 bytes in JSON (`foo\"bar`), causing the delta calculation to underestimate the refund log size. This can allow events near the limit to pass validation but then be rejected by the NEAR runtime.
Consider using the JSON-escaped length when calculating overhead/savings for more accurate predictions. Current tests only use memos without escapable characters, leaving this edge case uncovered.
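One way to compute the escaped length without pulling in serde_json is to count bytes under the JSON string escaping rules of RFC 8259 (a sketch; the short-escape set serde_json emits is assumed here):

```rust
// JSON-escaped byte length of a string, per RFC 8259 escaping rules:
// quote and backslash become two-byte escapes, the common control chars
// get short escapes (\b \t \n \f \r), and remaining control chars need \uXXXX.
fn json_escaped_len(s: &str) -> usize {
    s.bytes()
        .map(|b| match b {
            b'"' | b'\\' => 2,
            0x08 | 0x09 | 0x0A | 0x0C | 0x0D => 2,
            0x00..=0x1F => 6,
            _ => 1,
        })
        .sum::<usize>()
}

fn main() {
    assert_eq!(json_escaped_len("refund"), 6); // no escaping needed
    assert_eq!(json_escaped_len("foo\"bar"), 8); // 7 raw bytes -> 8 escaped
    assert_eq!(json_escaped_len("a\nb"), 4); // newline becomes \n
    println!("ok");
}
```

Using such an escaped length for both the existing memo and the `"refund"` replacement would keep the overhead/savings math aligned with what the serializer actually emits.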
🤖 Prompt for AI Agents
In `@nep245/src/events.rs` around lines 45 - 91, The memo length must account for
JSON escaping: update refund_log_delta to compute the memo's JSON-escaped byte
length (e.g., via serde_json::to_string(m) and subtracting the surrounding
quotes) instead of raw m.len(), and use that escaped length when computing
overhead and savings; also precompute the escaped length for the constant
REFUND_MEMO (adjust REFUND_STR_LEN or add a new ESCAPED_REFUND_STR_LEN) and
ensure REFUND_EXTRA_BYTES still reflects the surrounding JSON fragments, adding
serde_json as a dependency if not present, while keeping compute_refund_delta
and the RefundLogDelta logic unchanged.
Summary by CodeRabbit
New Features
Bug Fixes
Tests